Discussion of AI tends to follow one of two patterns. The first begins with a complex analysis of how the technology works, breaking down the basics of neural networks before moving on to the most advanced concept the article’s author managed to understand after a few minutes on Google, and ending with a vague prediction of the technology’s impact on various fields. The second begins with a grandiose description of a utopian or dystopian future dominated by a godlike AI, before sheepishly admitting that such predictions have been common, and commonly wrong, for more than sixty years. The goal of this article is to fall into neither pattern, and instead to break down the ways in which the security industry is being changed right now, today, and the ways in which it is likely to change over the next year or so.
Generative AI in security today
Microsoft is a major investor in OpenAI, the creator of ChatGPT. Though Microsoft’s resulting security tool is currently in a closed beta, users are being added and open enrollment is likely to be announced any day now, as the company and its competitors rush to capitalize on the opportunity.
On the other side of the proverbial fence, security leaders see attackers taking their first steps in the automation of offensive security. These tools are less publicized, and it’s common to see a researcher make a tentative first post about their results only to quickly delete it upon realizing the scope of their actions. AI tools have already been seen that can perform the initial stages of an attack: discovering open ports, identifying technology versions, and applying appropriate exploits. It’s a given that far more powerful, far more sophisticated versions are available to the nation-state actors and other advanced persistent threats roaming the digital wilds.
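To make the reconnaissance stage concrete, the following is a minimal Python sketch of the kind of step such tools automate: probing a handful of common ports and collecting service banners. The target host, port list, and timeout are illustrative assumptions, and the exploit-matching step is deliberately omitted.

```python
import socket

# Illustrative assumption: a short list of commonly exposed service ports.
COMMON_PORTS = [21, 22, 25, 80, 110, 143, 443, 3306, 8080]

def grab_banner(host: str, port: int, timeout: float = 2.0):
    """Return the service banner for host:port, '' if open but silent, None if closed."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            try:
                return sock.recv(1024).decode(errors="replace").strip()
            except socket.timeout:
                return ""  # port is open, but the service volunteered no banner
    except OSError:
        return None  # closed, filtered, or unreachable

if __name__ == "__main__":
    host = "scanme.nmap.org"  # a host whose owners explicitly permit scanning
    for port in COMMON_PORTS:
        banner = grab_banner(host, port)
        if banner is not None:
            # The automated tools described above would match this version
            # string against a database of known exploits; here it is printed.
            print(f"{host}:{port} open - {banner or '(no banner)'}")
```

Everything here has been possible with scripting for decades; what is new is an AI deciding, without human guidance, which banners matter and what to try next.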
Security leaders are, in short, in the midst of an arms race. Attackers seek to create unstoppable weapons, while defenders try desperately to construct immovable barricades. Each side tries to conceal its own discoveries while learning from those of its opponent.
12 months of curious minds
One of the core challenges in developing a new technology is its replicability. The scientists working on the Manhattan Project couldn’t simply classify the process of enriching uranium, or the process of building a bomb from a combination of conventional and nuclear materials, because those were relatively minor innovations, the sort of thing any competent researcher would eventually discover. Instead, they had to classify entire fields of science, hiding away some of the most groundbreaking achievements in the history of humankind. They had to do this because innovations are fundamentally repeatable: if a discovery is true, an accurate depiction of the world, then any sufficiently skilled person will rediscover it given the right preconditions. The best anyone can do is find the root concept and hide that, as the Manhattan Project scientists did with nuclear physics.
When it comes to AI, particularly generative AI, however, that cat is already out of the bag. It doesn’t matter how many security leaders write alarmist letters; the simple fact is that the technology is too well understood, too widespread, and too easily replicated to be locked away. This is compounded by the fact that AI itself accelerates the pace at which knowledge spreads. In other words, the better the AI someone has today, the harder it is to keep them from getting an even better one tomorrow. After all, the single most dangerous event in the hacking world is when an APT’s tools are leaked to the public.
Threat intelligence will no longer be a matter of tracking trends and producing graphics to demonstrate the value of the program. As more of the offensive and defensive functions are handed to AI, the key differentiator will be how, and on what, that AI is trained. Threat intelligence will no longer be able to focus on volume; it will have to tighten its focus on quality, discovering and disseminating information as quickly as possible. If an exploit is made public in hacker communities, every attacker will be able to deploy it within an extremely short timeframe. Companies that rely on old-school, RSS-based threat intelligence will rapidly find themselves victimized by attackers capable of utilizing cutting-edge TTPs (tactics, techniques, and procedures) the moment those TTPs are leaked.
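As a point of comparison, here is a minimal sketch of what machine-speed intelligence consumption can look like: polling a machine-readable feed and diffing it against what was last seen. It assumes CISA’s Known Exploited Vulnerabilities (KEV) JSON feed as the source; the URL, field names, and polling interval are assumptions, and a real pipeline would push events rather than poll.

```python
import json
import time
import urllib.request

# Assumed location of CISA's Known Exploited Vulnerabilities (KEV) feed.
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def fetch_kev_ids() -> set:
    """Return the set of CVE IDs currently listed in the KEV feed."""
    with urllib.request.urlopen(KEV_URL, timeout=30) as resp:
        data = json.load(resp)
    # "vulnerabilities" and "cveID" are the assumed field names in the feed.
    return {vuln["cveID"] for vuln in data.get("vulnerabilities", [])}

def watch(interval_seconds: int = 3600) -> None:
    """Poll the feed and flag newly listed CVEs as they appear."""
    known = fetch_kev_ids()
    while True:
        time.sleep(interval_seconds)
        current = fetch_kev_ids()
        for cve in sorted(current - known):
            # In practice this would feed a SIEM or a patch-management queue,
            # not a print statement.
            print(f"Newly exploited in the wild: {cve}")
        known = current

if __name__ == "__main__":
    watch()
```

The point is not the feed itself but the latency: a program that diffs structured data every hour will always beat an analyst reading headlines every morning.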
Twelve months from now, a security program will live and die based on how well it discovers information about potential attackers, and how well it conceals information from those same attackers. Every other consideration, barring the minimum resources needed to participate in this arms race, will fall by the wayside.
The truth is that no one, not even the most powerful AI model, can predict the future. The current trend, however, is away from tools and toward knowledge, as it becomes increasingly unlikely that any single tool will be able to compete in breadth or depth with AI. The dissemination and control of information, even information at the most fundamental levels, is the battleground on which the AI wars will be decided.